Synthetic tabular data generation becomes crucial when real data is limited, expensive to collect, or simply cannot be used due to privacy concerns. However, generating high-quality synthetic data is challenging. Several probabilistic, statistical, and generative adversarial network (GAN) based approaches have been proposed for synthetic tabular data generation. Once generated, evaluating the quality of the synthetic data is itself quite challenging. Some traditional metrics have been used in the literature, but a common, robust, single metric is lacking. This makes it difficult to properly compare the effectiveness of different synthetic tabular data generation methods. In this paper, we propose a new universal metric, TabSynDex, for robust evaluation of synthetic data. TabSynDex assesses the similarity of synthetic data to real data through different component scores, which evaluate the characteristics desirable in "high-quality" synthetic data. Being a single-score metric, TabSynDex can also be used to observe and evaluate the training of neural network based approaches, which helps in obtaining earlier insights. Further, we present several baseline models for comparative analysis of the proposed evaluation metric against existing generative models.
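To illustrate the single-score idea, the sketch below combines two toy component scores (basic column statistics and correlation structure) into one number in [0, 1]. The component definitions here are assumptions for illustration only, not the actual TabSynDex components defined in the paper:

```python
import numpy as np

def component_scores(real, synth):
    """Two illustrative component scores in [0, 1]; the actual TabSynDex
    components are defined in the paper — these are stand-ins."""
    scores = {}
    # Basic-statistics score: penalize the gap in column means,
    # measured in units of the real columns' standard deviations.
    gap = np.abs(real.mean(axis=0) - synth.mean(axis=0))
    scale = real.std(axis=0) + 1e-8
    scores["basic_stats"] = float(np.clip(1.0 - gap / scale, 0.0, 1.0).mean())
    # Correlation score: penalize differences between the two
    # feature-correlation matrices.
    diff = np.abs(np.corrcoef(real.T) - np.corrcoef(synth.T))
    scores["correlation"] = float(np.clip(1.0 - diff.mean(), 0.0, 1.0))
    return scores

def tabsyndex_like(real, synth):
    """Average the component scores into a single score in [0, 1]."""
    s = component_scores(real, synth)
    return sum(s.values()) / len(s)

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 4))
good = real + rng.normal(scale=0.1, size=real.shape)   # close to real
bad = rng.normal(loc=5.0, scale=0.1, size=real.shape)  # far from real
print(tabsyndex_like(real, good) > tabsyndex_like(real, bad))  # True
```

Because the result is a single bounded number, it can be logged after every training epoch of a generative model, which is what makes the "observe training" use case possible.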
Modern privacy regulations grant citizens the right to be forgotten by products, services, and companies. In the case of machine learning (ML) applications, this necessitates deleting data not only from storage archives but also from ML models. Due to the increasing need for regulatory compliance in ML applications, machine unlearning is becoming an emerging research problem. Right-to-be-forgotten requests come in the form of removing a certain set or class of data from an already trained ML model, and practical considerations preclude retraining the model from scratch minus the deleted data. The few existing studies use either the whole training data, a subset of it, or some metadata stored during training to update the model weights for unlearning. However, strict regulatory compliance requires time-bound deletion of data, so in many cases no data related to the training process or training samples may be accessible, even for unlearning purposes. We therefore ask the question: is it possible to achieve unlearning with zero training samples? In this paper, we introduce the novel problem of zero-shot machine unlearning, which caters to the extreme but practical scenario where zero original data samples are available for use. We then propose two novel solutions for zero-shot machine unlearning based on (a) error minimizing-maximizing noise and (b) gated knowledge transfer. These methods remove the information of the forget data from the model while maintaining model efficacy on the retain data. The zero-shot approach offers good protection against model inversion attacks and membership inference attacks. We introduce a new evaluation metric, the Anamnesis Index (AIN), to effectively measure the quality of an unlearning method. The experiments show promising results for unlearning in deep learning models on benchmark vision datasets.
Unlearning the data observed during the training of a machine learning (ML) model is an important task that can play a pivotal role in fortifying the privacy and security of ML-based applications. This paper raises the following questions: (i) can we unlearn a single class or multiple classes of data from an ML model without looking at the full training data even once? (ii) can we make the process of unlearning fast and scalable to large datasets, and generalize it to different deep networks? We introduce a novel machine unlearning framework with error-maximizing noise generation and impair-repair based weight manipulation that offers an efficient solution to the above questions. An error-maximizing noise matrix is learned for the class to be unlearned using the original model. The noise matrix is then used to manipulate the model weights to unlearn the targeted class of data. We introduce impair and repair steps for controlled manipulation of the network weights. In the impair step, the noise matrix together with a very high learning rate is used to induce sharp unlearning in the model. Thereafter, the repair step is used to regain overall performance. With very few update steps, we show excellent unlearning while substantially retaining the overall model accuracy. Unlearning multiple classes requires a similar number of update steps as a single class, making our approach scalable to large problems. Our method is highly efficient compared to existing methods, works for multi-class unlearning, places no constraints on the original optimization mechanism or network design, and works well in both small and large-scale vision tasks. This work is an important step towards fast and easy implementation of unlearning in deep networks. We will make the source code publicly available.
Computed tomography (CT) has been routinely used for the diagnosis of lung diseases and recently, during the pandemic, for detecting the infectivity and severity of COVID-19 disease. One of the major concerns in using machine learning (ML) approaches for automatic processing of CT scan images in clinical settings is that these methods are trained on limited and biased subsets of publicly available COVID-19 data. This has raised concerns regarding the generalizability of these models on external datasets, not seen by the model during training. To address some of these issues, in this work CT scan images from confirmed COVID-19 data obtained from one of the largest public repositories, COVIDx CT 2A, were used for training and internal validation of machine learning models. For the external validation we generated the Indian-COVID-19 CT dataset, an open-source repository containing 3D CT volumes and 12096 chest CT images from 288 COVID-19 patients from India. Comparative performance evaluation of four state-of-the-art machine learning models, viz., a lightweight convolutional neural network (CNN), and three other CNN based deep learning (DL) models, VGG-16, ResNet-50 and Inception-v3, in classifying CT images into three classes, viz., normal, non-COVID pneumonia, and COVID-19, is carried out on these two datasets. Our analysis showed that the performance of all the models is comparable on the hold-out COVIDx CT 2A test set, with 90% - 99% accuracies (96% for the CNN), while on the external Indian-COVID-19 CT dataset a drop in performance is observed for all the models (8% - 19%). The lightweight CNN performed the best on the external dataset (accuracy 88%) in comparison to the deep learning models, indicating that a lightweight CNN generalizes better to unseen data. The data and code are made available at https://github.com/aleesuss/c19.
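The external-validation protocol reduces to comparing hold-out accuracy against accuracy on the unseen external set. A minimal sketch, with hypothetical label and prediction arrays standing in for real model outputs:

```python
import numpy as np

# Class encoding (illustrative): 0=normal, 1=non-COVID pneumonia, 2=COVID-19.

def accuracy(y_true, y_pred):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def external_drop(y_int, p_int, y_ext, p_ext):
    """Accuracy drop when moving from the internal (hold-out) test set
    to the external validation set."""
    return accuracy(y_int, p_int) - accuracy(y_ext, p_ext)

# Hypothetical labels/predictions for the two test sets.
y_int = np.array([0, 1, 2, 2, 1, 0, 2, 1])
p_int = np.array([0, 1, 2, 2, 1, 0, 2, 2])  # 7/8 correct -> 0.875
y_ext = np.array([0, 1, 2, 2, 1, 0, 2, 1])
p_ext = np.array([0, 2, 2, 1, 1, 0, 2, 2])  # 5/8 correct -> 0.625
print(external_drop(y_int, p_int, y_ext, p_ext))  # 0.25
```

The 8% - 19% drops reported in the abstract are exactly this quantity, computed per model across the COVIDx CT 2A hold-out set and the Indian-COVID-19 CT dataset.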
Concept-based interpretability methods aim to explain a deep neural network model's predictions using a pre-defined set of semantic concepts. These methods evaluate a trained model on a new, "probe" dataset and correlate the model's predictions with the visual concepts labeled in that dataset. Despite their popularity, they suffer from limitations that are not well understood and articulated in the literature. In this work, we analyze three commonly overlooked factors in concept-based explanations. First, the choice of the probe dataset has a profound impact on the generated explanations. Our analysis reveals that different probe datasets can lead to very different explanations, and suggests that the explanations are not generalizable outside the probe dataset. Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they claim to explain, calling into question the correctness of the explanations. We advocate for the use of only visually salient concepts in concept-based explanations. Finally, while existing methods use hundreds or even thousands of concepts, our human studies reveal a much stricter upper bound of 32 concepts or less, beyond which the explanations are far less practically useful. We make suggestions for the future development and analysis of concept-based interpretability methods. Code for our analysis and user interface can be found at \url{https://github.com/princetonvisualai/overlookedfactors}.
Deep learning models have achieved remarkable success in different areas of machine learning over the past decade; however, the size and complexity of these models make them difficult to understand. In an effort to make them more interpretable, several recent works focus on explaining parts of a deep neural network through human-interpretable, semantic attributes. However, it may be impossible to completely explain complex models using only semantic attributes. In this work, we propose to augment these attributes with a small set of uninterpretable features. Specifically, we develop a novel explanation framework, ELUDE (Explanation via Labelled and Unlabelled DEcomposition), that decomposes a model's prediction into two parts: one that is explainable through a linear combination of the semantic attributes, and another that is dependent on a set of uninterpretable features. By identifying the latter, we are able to analyze the "unexplained" portion of the model, obtaining insights into the information the model uses. We show that the set of unlabelled features can generalize to multiple models trained with the same feature space, compare our work to two popular attribute-oriented methods, Interpretable Basis Decomposition and Concept Bottleneck, and discuss the additional insights ELUDE provides.
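The labelled half of such a decomposition can be sketched as a least-squares projection of the model's prediction onto the attribute matrix, with the residual standing in for the "unexplained" part. This is an illustration only, with synthetic attribute data, and not the paper's actual training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: model logits for n samples, plus binary semantic
# attribute annotations. All data here is synthetic and illustrative.
n, n_attr = 500, 10
A = rng.integers(0, 2, size=(n, n_attr)).astype(float)  # attribute matrix
true_w = rng.normal(size=n_attr)
unexplained = 0.5 * rng.normal(size=n)  # part attributes cannot capture
logits = A @ true_w + unexplained

# Least-squares fit: the portion of the prediction explainable as a
# linear combination of the labelled attributes.
w, *_ = np.linalg.lstsq(A, logits, rcond=None)
explained = A @ w
residual = logits - explained  # the "unexplained" part to analyze further

frac = 1.0 - residual.var() / logits.var()
print(f"fraction of variance explained by attributes: {frac:.2f}")
```

The residual is, by construction, orthogonal to the attribute columns, so whatever structure it contains genuinely lies outside the span of the semantic attributes; ELUDE's contribution is then in learning a small set of features that capture that residual.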
As machine learning is increasingly applied to high-impact, high-risk domains, a number of new methods aim to make AI models more human-interpretable. Despite the recent growth of interpretability work, there is a lack of systematic evaluation of the proposed techniques. In this work, we introduce HIVE (Human Interpretability of Visual Explanations), a novel human evaluation framework for diverse interpretability methods in computer vision; to the best of our knowledge, this is the first work of its kind. We argue that human studies should be the gold standard for properly evaluating how interpretable a method is to human users. While human studies are often avoided due to challenges associated with cost, study design, and cross-method comparison, we describe how our framework mitigates these issues and conduct IRB-approved studies of four methods that represent the diversity of interpretability work: GradCAM, BagNet, ProtoPNet, and ProtoTree. Our results suggest that explanations (regardless of whether they are actually correct) engender human trust, yet are not distinct enough for users to distinguish between correct and incorrect predictions. Lastly, we also open-source our framework to enable future studies and to encourage more human-centered approaches to interpretability.
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: improved participant benefits, increased efficiency, and reduced experimental costs in adaptive multi-armed experiments with exponential rewards.
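The adaptive allocation loop can be sketched as follows. The true modified Gittins index for exponential rewards is derived in the paper; the index function below is a stand-in (Bayesian posterior mean plus an exploration bonus, with a conjugate Gamma prior on each arm's rate parameter), used only to show the shape of the design:

```python
import numpy as np

rng = np.random.default_rng(0)

def index(a, b, t):
    """Stand-in allocation index, NOT the paper's modified Gittins index.
    For exponential rewards with rate lam and a Gamma(a, b) prior on lam,
    the posterior mean reward is E[1/lam] = b / (a - 1) for a > 1."""
    mean_reward = b / (a - 1.0)
    bonus = np.sqrt(2.0 * np.log(t + 1) / a)  # shrinks as the arm is sampled
    return mean_reward + bonus

true_means = [1.0, 1.5, 2.0]   # mean rewards of the 3 arms (arm 2 is best)
a = np.full(3, 2.0)            # Gamma shape per arm (prior)
b = np.full(3, 2.0)            # Gamma rate per arm (prior)
pulls = np.zeros(3, dtype=int)
total = 0.0

for t in range(2000):
    # Allocate the next participant to the arm with the highest index.
    arm = int(np.argmax([index(a[k], b[k], t) for k in range(3)]))
    reward = rng.exponential(true_means[arm])
    a[arm] += 1.0              # conjugate Gamma update for exponential data
    b[arm] += reward
    pulls[arm] += 1
    total += reward

print(pulls)  # typically concentrates pulls on the best arm
```

The "earning" advantage of an index-based design shows up as the total reward this loop collects relative to an equal-allocation design, while the suboptimal arms still receive enough pulls to retain statistical power.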
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the usage of such a robot in a domestic setting is still largely a research problem. This paper discusses the understanding and virtual simulation of such a robot capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expression on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish the framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio response. Speech emotion detection was not as performant as ERANNs or Zeta Policy learning, but still managed an accuracy of 63.5%. The video emotion detection system produced results almost on par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm learned extremely rapidly, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
Real-world robotic grasping can be done robustly if complete 3D Point Cloud Data (PCD) of an object is available. However, in practice, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. Our network is inherently invariant to the object pose and point permutations, and generates PCDs that are geometrically consistent and properly completed. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
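Permutation invariance, one of the properties claimed for the completion network, is easy to illustrate with a toy PointNet-style set encoder (per-point features followed by symmetric max-pooling); this is a minimal sketch, not the paper's Offset-Attention transformer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy permutation-invariant set encoder: a per-point linear + ReLU
# feature map followed by max-pooling over the point dimension.
# The weights are random placeholders for illustration.
W1 = rng.normal(size=(3, 32))
b1 = rng.normal(size=32)

def encode(pcd):
    """pcd: (n_points, 3) array -> (32,) permutation-invariant code."""
    feats = np.maximum(pcd @ W1 + b1, 0.0)  # per-point features
    return feats.max(axis=0)                # symmetric (order-free) pooling

pcd = rng.normal(size=(128, 3))             # a stand-in partial point cloud
perm = rng.permutation(128)
code_a = encode(pcd)
code_b = encode(pcd[perm])
print(np.allclose(code_a, code_b))  # True: point order does not matter
```

Because the pooling operator is symmetric, reordering the rows of the input cloud cannot change the code, which is exactly the invariance a completion network needs so that the same object observed with points in a different order completes to the same geometry.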